TAKE IT DOWN Act Becomes Law, Introducing Landmark Federal Protections to Combat Online Exploitation and Deepfakes


4 minute read | May 21, 2025

On May 19, 2025, President Donald Trump signed into law the bipartisan TAKE IT DOWN Act (S. 146).[1] While almost forty U.S. states have enacted some form of legislation targeting online abuse, the TAKE IT DOWN Act is the first significant federal legislation to provide a baseline level of protection against the spread of non-consensual intimate imagery (NCII), including AI-generated deepfake pornography and so-called “revenge porn.”[2]

Digital platforms have one year, until May 19, 2026, to establish the required notice and removal processes.

What is the TAKE IT DOWN Act?

The TAKE IT DOWN Act criminalizes the publication of NCII as well as realistic, computer-generated intimate images depicting identifiable individuals without their consent. It also mandates that social media platforms, online forums, hosting services and other tech companies that facilitate user-generated content remove such content within 48 hours of a valid request from the affected individual. The Federal Trade Commission is empowered to enforce these requirements and hold online platforms accountable, treating non-compliance as a deceptive or unfair practice under federal consumer protection law.

Key Provisions

  • Criminalization of NCII Publication: Knowingly publishing NCII without consent is now a federal criminal offense. For crimes against adult victims, penalties can include up to two years in prison; for crimes against minors, up to three years.
  • 48-Hour Takedown Requirement: Platforms must remove reported content quickly — within 48 hours of a verified request — and take reasonable steps to prevent it from being reposted.
  • Protection for Good Faith Actions: The Act includes exceptions for disclosures of intimate imagery made in good faith or for lawful purposes, such as law enforcement investigations, legal proceedings, medical treatment and reporting unlawful conduct.  

What does this mean for online platforms?

The Act introduces significant new legal obligations for online platforms, particularly those that host or distribute user-generated content. These changes will require most platforms to reassess their content moderation workflows, takedown procedures and liability exposure.

1. An easily accessible complaint process and prompt takedown are required

Under the new rules, platforms must:

  • Provide a clear and accessible complaint process that allows affected individuals to report NCII, request removal and securely verify their identity.
  • Take down reported NCII within 48 hours of receiving a valid request.
  • Make reasonable efforts to identify and remove any known identical copies of the reported image.

2. Proactive monitoring is not required, but may be advisable

The Act does not explicitly require platforms to proactively monitor or filter all content; however, it does require “reasonable efforts to identify and remove any known identical copies” of the intimate visual depiction. It is unclear how “reasonable efforts” will be interpreted, so platforms are well advised to consider duplicate-detection tools such as image hashing. Additionally, although the Act does not create a continuing duty to police future uploads beyond removing the reported image and the identical copies specifically identified in valid reports, broader moderation practices may help mitigate legal and reputational risk with enforcement authorities and users.
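
For illustration only, the following is a minimal sketch (in Python) of how a platform might flag byte-identical copies of images already removed under valid requests by comparing cryptographic hashes. The function names and in-memory store are assumptions rather than anything the Act prescribes, and exact hashing only catches byte-for-byte copies; platforms often pair it with perceptual hashing to catch re-encoded or resized variants.

    import hashlib

    # Hashes of images already removed pursuant to valid takedown requests.
    # In practice this would live in a persistent datastore, not an in-memory set.
    removed_image_hashes: set[str] = set()

    def register_takedown(image_bytes: bytes) -> str:
        """Record the hash of an image removed under a valid request."""
        digest = hashlib.sha256(image_bytes).hexdigest()
        removed_image_hashes.add(digest)
        return digest

    def is_known_identical_copy(image_bytes: bytes) -> bool:
        """Return True if an upload is byte-for-byte identical to previously removed NCII."""
        return hashlib.sha256(image_bytes).hexdigest() in removed_image_hashes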

3. Section 230 implications

  • Section 230 protections remain intact, generally.
  • FTC enforcement is tied to a platform’s notice-and-takedown obligations rather than to the underlying content, so enforcement under this Act should not be impacted by Section 230.
  • In enforcement proceedings, courts may consider whether a platform has demonstrated good-faith efforts to comply, but such efforts are not a formal defense.

What should platforms do now?

Online platforms should begin assessing their readiness now in anticipation of the May 19, 2026 compliance deadline to reduce risk exposure, strengthen compliance, and signal a proactive commitment to user safety. Online platforms that host images, videos, or social content should:

  1. Update Terms of Service to reflect a zero-tolerance policy for nonconsensual and AI-generated intimate imagery.
  2. Revise moderation workflows to ensure (1) a 48-hour turnaround for verified takedown requests (a minimal deadline-tracking sketch appears after this list), and (2) identification and removal of any identical copies of the same image.
  3. Audit user reporting and flagging tools to confirm they are accessible, timely and effective.
  4. Train content moderation teams on how to handle sensitive NCII reports and differentiate them from lawful content.
  5. Consider implementing tools to prevent known content from resurfacing.
  6. Designate a compliance contact or team responsible for liaising with the FTC or law enforcement.
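
As a hypothetical illustration of item 2 above, the sketch below (in Python) shows one way to track the 48-hour removal window for verified takedown requests. The class and field names are assumptions; the Act sets the deadline but does not prescribe any particular tooling.

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta, timezone

    # The Act's removal deadline for verified requests.
    REMOVAL_WINDOW = timedelta(hours=48)

    @dataclass
    class TakedownRequest:
        content_id: str
        received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

        @property
        def removal_deadline(self) -> datetime:
            """Latest time by which the reported content must be removed."""
            return self.received_at + REMOVAL_WINDOW

        def is_overdue(self, now: datetime | None = None) -> bool:
            """True if the 48-hour window has elapsed."""
            return (now or datetime.now(timezone.utc)) > self.removal_deadline

A moderation queue could, for example, sort open requests by removal_deadline and escalate any request for which is_overdue() returns True.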

If you have questions about the TAKE IT DOWN Act, reach out to the authors Aravind Swaminathan, Jake Heath, Meg Hennessey, David Curtis, Ryann McMurry, or Tom Zick.



[1] Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act

[2] Digital forgeries—the Act’s terminology for AI-generated deepfakes—have made it possible to fabricate pornographic images of individuals without their consent, even when no such original content exists.